*****************************************************************
Part four in a series of tutorials concerning communications and
the micro-computer industry. This particular tutorial was
created during a call from Mr. Bruce Aldrich and deals with the
concepts of multi-tasking and the future of Local Area Networks.
*****************************************************************
A: James, a few questions regarding the future of PC multi-
tasking.
1) Are you familiar with a new software product for the 386
called PC-MOS (I believe)?
2) Are you familiar with DRI's Concurrent DOS (as well as a
just-out update by a different name which currently escapes me)?
Obviously, I am interested in exploring the area of multi-
tasking.
D: If you will permit me, I would like to digress and establish
some fundamentals before answering directly. I wish to talk
about what has happened in the micro-computer world, and why, in
order to develop a reasonable insight into where it is going.
About the time micro-computers were introduced into the
marketplace, the manufacturers had recognized a long-term trend
that needed to be continued: about every four years the industry
was providing three times the prior performance/capacity at only
about double the prior price - the cost per unit of performance
kept falling. By the early seventies that trend had produced
computers in three broad forms:
1) The largest machines were (and still are) called mainframes.
Entry level cost was about a million dollars. For several
million you could get a mainframe that could support several
thousand simultaneous users of that system. It cost another
million or so per year to maintain that system (air conditioning,
raised floor and other capital cost amortizations, systems
engineers, programmers, librarians, etc.).
2) For an order of magnitude less money, about one-hundred
thousand dollars, you could buy a relatively small mini-computer.
For several hundred thousand dollars you could buy a mini-
computer that could support several hundred simultaneous users on
that system. It took about a hundred thousand dollars a year to
support the initial investment (environment, supplies,
programmers, operators, etc.).
3) Finally, an order of magnitude less was required to purchase
a micro-computer. Unlike the other two kinds of systems, the
micro-computers could not support multiple users. On the other
hand, they did not take thousands of dollars per year to
maintain. There were no environmental requirements and no major
programmer or operator costs. In fact, what you could buy for
about $10,000 was a single-user computer. Thus, they became
known as personal computers.
Well, that all may have been obvious, but looking under the
covers, so to speak, it is interesting to note that the
manufacturers of these micro-computers did not expect the
business community to be interested in them. Their real intent
in bringing these machines to market was to gain enough
manufacturing experience to get ahead of the efficiency ramp-up
curve and, through volume sales, to build economies of
manufacturing scale that would let them produce yet another
generation of computers below the micro-computer level - and to
do so not at a one order of magnitude price decrease, but two!
They wanted to be able to produce and sell micro-computers
(chips, actually) priced not at $1,000 each, but at something
closer to $100. The layman seems
not to appreciate the fact that they were completely successful
in their efforts. You cannot buy a $30,000 automobile today that
does not have at least one $25 computer in it.
To get the volume of sales high, the manufacturers introduced
those computers as game players for home use. That generated the
initial bad name for micros and the initial resistance by
business to use them. Who would risk part of his business career
on a decision to use game players for serious work?
An interesting thing happened soon after these micro-computers
began to sell in volume; they were found to be MORE reliable than
their larger cousins - of course, they had much less complicated
circuitry, many fewer components, and required much less power.
Further, and this is the MOST IMPORTANT DEVELOPMENT OF ALL, there
emerged an INDUSTRY STANDARD called the S-100 bus. This
permitted many different manufacturers to enter the marketplace
with new peripherals, clones, and most important, with software
that ran on almost all of these early 8-bit machines.
As clever engineers recognized that these machines could perform
several hundred thousand machine instructions per second and
that the vast majority of that capability was going unused, they
looked for ways to use it. Let me explain that a little more.
In these early machines the typical program (a game or word
processor) was doing something like this: "I need a character
from the keyboard; I'll see if one has been typed. No, nothing
yet. Well, I need a character from the keyboard; I'll see if
one has been typed. No, ..." In other words, though it could
perform several hundred thousand instructions per second, it was
almost always crawling along at the speed of the human
operator's typing skill - SLOW!
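If it helps to see it, here is a minimal sketch of that
busy-wait loop in C. The kbhit() and getch() routines are DOS
compiler extensions from conio.h, and the 'q' exit is invented
just so the sketch terminates:

    #include <conio.h>  /* DOS-era console I/O: kbhit(), getch() */

    int main(void)
    {
        int c;
        for (;;) {
            /* Spin until a key is available - the CPU does
               nothing useful while waiting on the typist. */
            while (!kbhit())
                ;              /* busy-wait: wasted instructions */
            c = getch();       /* finally, read the character */
            if (c == 'q')      /* invented exit condition */
                break;
            /* ... process the character ... */
        }
        return 0;
    }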
The engineers built clever new programs that could keep several
different programs in memory at the same time. Of course only
one could actually run at any instant, but whenever one of those
programs needed to read a character from the keyboard and found
that none had yet been typed, or needed to print a character to
the screen or a printer only to find that the output device was
not yet ready, that program would be temporarily stopped and the
system would go on to the next program in memory to see if it
was capable of doing some work. This process of appearing to
run more than one program at a time is called multi-tasking. In
human terms the switching was extremely fast. The result: the
computers were performing much more work than they had prior to
multi-tasking.
But there were some real limits to this approach. After all,
there was only one central processing unit (CPU) and at most you
could have only 64K of memory that had to be shared by the
programs.
Remember I just mentioned the important industry standard that
emerged from the mass sale of micros, the S-100 bus? Well, this
was the enabling event that made possible the next method of
increasing the performance of these small machines. S-100 bus
cards were introduced that contained, on a single card, both a
separate CPU and 64K of memory dedicated to that CPU. These
cards became known as 'slave systems' and were plugged into the
same S-100 bus as the original CPU and memory.
(A 'bus' is merely a set of parallel wires with connectors on
them that allow every card plugged into them to have exactly the
same data available to each of its pins as do all the other cards
connected to those lines.) The original CPU became known as the
'master system' or 'server' and became responsible for allocating
work out to each of the other CPUs on the bus (that is why they
were called 'slaves'). In this way a great deal more performance
was made available from that micro than previously. I must add
that it also required a special and more complicated Operating
System.
What I just described was the beginning of what the industry
called 'tightly coupled' systems. These systems had common
access to all the system memory and all the peripheral devices
attached to the system. Further, a terminal was attached to
each of the slave CPUs and, thus, several simultaneous users of
the system were now a reality. Recall that only a few years
earlier it would have taken several hundred thousand dollars'
worth of computer to support more than one simultaneous user.
This was a breakthrough that brought serious attention from the
business community.
Looking more closely at those configurations, what was happening
was that all of the expensive devices that once were being
dedicated to only a single user were now being shared. Printers
and disks are the most obvious examples. Along with sharing of
devices came the obvious problems of controlling shared devices.
It would not do at all if two users wanted to print reports at
the same time on, say, only one printer, and both were allowed
to do so. Thus were born buffers and sequencers that let the
user think he was printing but which in reality redirected the
output destined for a printer onto disk someplace, to await a
time when the printer was no longer in use by another user.
These were called 'spoolers', and the concept is fundamental to
the successful sharing of printers even in the mainframes of
today.
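Here is a minimal sketch of the spooling idea in C, with
invented names (spool_print, drain_spool) and stdout standing in
for the printer:

    #include <stdio.h>

    /* Application side: "print" by appending to a spool file
       on disk instead of touching the printer at all. */
    static void spool_print(const char *job, const char *text)
    {
        char path[64];
        FILE *f;
        sprintf(path, "%s.SPL", job); /* one spool file per job */
        f = fopen(path, "a");
        if (f != NULL) {
            fputs(text, f);
            fclose(f);
        }
    }

    /* Spooler side: when the printer is free, copy a queued
       job to it and remove the spool file. */
    static void drain_spool(const char *job)
    {
        char path[64], line[256];
        FILE *f;
        sprintf(path, "%s.SPL", job);
        f = fopen(path, "r");
        if (f == NULL)
            return;                   /* nothing spooled */
        while (fgets(line, sizeof line, f))
            fputs(line, stdout);      /* stand-in for printer */
        fclose(f);
        remove(path);                 /* job delivered */
    }

    int main(void)
    {
        /* Two users "print" at once; neither waits. */
        spool_print("CLERK1", "Report for user one\n");
        spool_print("CLERK2", "Report for user two\n");

        /* Later, when the printer is free, jobs are drained
           one at a time - no interleaving on the page. */
        drain_spool("CLERK1");
        drain_spool("CLERK2");
        return 0;
    }

The point is that each application finishes 'printing'
immediately; only the spooler ever touches the real device.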
Disk sharing brought with it a more subtle problem: the
possibility that two users might inadvertently corrupt the
information each requires.
For example, assume that there is a client file on a disk drive
that contains only one record in it and that that record contains
the current balance due by that client to the firm. Suppose that
an accounts receivable clerk is operating one terminal and wants to
post a $1,000 payment received just as a sales entry clerk tries
to post a new credit purchase of $500. Finally, assume the
record starts with a balance due of $2,000. If both of these
clerks happen to read the current balance due at almost the same
time they would put into their memory the record that says that
they are starting with a balance due of $2,000. Let's say that
the sales entry clerk posts the new purchase of $500 to that
record and saves the resulting $2,500 record back onto the disk.
Then the accounts receivable clerk posts the check received of
$1,000 to her copy of the current balance (which still says
$2,000) and then saves the resulting new balance due of $1,000
back to the disk (right on top of the old one). The obvious
result is that the new sale has been lost, for when either of
these clerks next reads the disk record it will show $1,000
rather than $1,500 as it should. This insidious problem
is solved with the implementation of what are called file and
record interlocks. With these functioning properly, no more
than one person at a time may modify a given disk record.
Unfortunately, even today, most software does not consider that
this might be a problem and does NOT use file and record
interlocks!
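For the curious, here is a minimal sketch of the safe
read-modify-write using POSIX fcntl() record locks - a modern
stand-in for the interlocks just described. It assumes a file
CLIENT.DAT already exists holding the balance as plain text:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void post(const char *file, long delta)
    {
        char buf[32];
        long balance = 0;
        int  len;
        ssize_t n;
        struct flock lk;
        int fd = open(file, O_RDWR);
        if (fd < 0)
            return;

        memset(&lk, 0, sizeof lk);
        lk.l_type   = F_WRLCK;  /* exclusive: one writer at a time */
        lk.l_whence = SEEK_SET;
        lk.l_start  = 0;
        lk.l_len    = 0;        /* 0 means "to end of file" */
        fcntl(fd, F_SETLKW, &lk);   /* wait here for the lock */

        n = pread(fd, buf, sizeof buf - 1, 0);
        if (n > 0) {
            buf[n] = '\0';
            sscanf(buf, "%ld", &balance); /* current balance */
        }

        balance += delta;                 /* apply this posting */

        len = snprintf(buf, sizeof buf, "%ld\n", balance);
        pwrite(fd, buf, (size_t)len, 0);  /* write back in place */
        ftruncate(fd, len);

        lk.l_type = F_UNLCK;    /* let the next clerk post */
        fcntl(fd, F_SETLK, &lk);
        close(fd);
    }

    int main(void)
    {
        post("CLIENT.DAT", +500);   /* sales clerk's purchase */
        post("CLIENT.DAT", -1000);  /* payment received */
        return 0;
    }

Because each posting holds the exclusive lock across its entire
read-modify-write, the file ends at $1,500 instead of the
lost-update $1,000.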
Along came the 16-bit machines, now generically called PC's
(because IBM called their machines Personal Computers rather
than game players - good move!). Besides having faster CPUs,
these machines broke through the 64K maximum-memory barrier and
thus had much more satisfactory performance available for both
multi-tasking and multi-user work. Though they were much better
equipped to handle these performance-oriented uses, the software
did not support them! Indeed, the user community was set back
almost four years by the failure of the hardware industry to
work closely with the software industry to meet the needs of
their users.
Many believe it was a conscious effort to sell as much 'iron'
as possible before the prices of that equipment fell due to
competition. Remember, in a single-user environment there are
no shared devices. Every user that wanted to print something had
to buy a printer.
For several years after the introduction of the 16-bit micro-
computers the manufacturers of the 'old' 8-bit machines continued
to advance their capabilities. As I mentioned earlier, the
ability to support multiple users at the same time had been
introduced BEFORE the 16-bit machines were even announced.
Development continued in a new direction, however. It was
already apparent that there were limits to how many cards could
be plugged into the same S-100 bus before conflicts and
simultaneous demands on that single bus dramatically degraded
its performance. The new approach was called 'loosely coupled'
systems. In this method of sharing resources (notably printers
and disk
devices) complete computers were tied together via coax cable and
elegant software. Messages were routed from one computer to the
next over those cables. Sometimes they were arranged in loops or
rings, without 'ends'. Sometimes they were arranged with a
central computer and the remote machines at the ends of the
cables, as 'stars' or 'spokes'. Whatever the configuration of the
cabling, the result was that a user of any of the computers
connected to the cable could send print jobs to shared printers
(spooling as necessary) and read or write files from disk drives
that were located on other computers (typically the master or
server). And as you might expect, before long there evolved the
ability to connect one such NETWORK of computers to another one
that was located either nearby or sometimes several thousand
miles away via telephone connections (these connections were
called 'gateways').
And what were the bigger 16-bit computer manufacturers doing
all this while? They were selling a lot of micro-computers
(based largely on their names: IBM, DEC, WANG). Finally,
several companies that had pioneered the development of
multi-user capabilities in the 8-bit machines, along with
network component manufacturers, upgraded to the 16-bit
machines. 3COM, Novell, and several others announced NETWORK
capability for the PC's (as if it were the best thing since
night baseball and something new).
It was not met with great enthusiasm by the business community.
Primarily because IBM and the other computer manufacturers had
been so successful in convincing these buyers that a machine on
every desk was the wave of the future, and because they did not
yet have networking capability of their own, it was pronounced
'premature' to leap into a potentially 'non-standard',
potentially 'dead-ended' approach such as those being offered by
these network manufacturers. IBM soon announced their own
network architecture, which was the worst of the bunch, and a
few years later they admitted that there was a better way and
'pre-announced' exactly how businesses should wire their
buildings to prepare for their new network architecture. It
turns out that that architecture changed after IBM's more loyal
clients had done as IBM recommended, but that is another story
altogether.
The message I am trying to get to is that it is still premature
to bet on a specific network architecture (loosely coupled
systems). Further, tightly coupled systems have been TOTALLY
ignored by IBM, as they are simply too efficient and produce
nothing more than incremental hardware sales. Finally, with the
introduction of Intel's 80286 CPU, the 16-bit micro-computers
became able to operate very efficiently in multi-tasking mode
(remember how it all started with the 8-bit machines).
IBM did not make multi-tasking software available for their
machines that was compatible with the existing software
(PC-DOS). Instead, they 'introduced' UNIX as the software that
permitted multi-tasking on their machines. Indeed, several
'flavors' of this massive and highly inefficient Operating
System soon became available (XENIX is a UNIX derivative or
clone). In other words, the industry as led by IBM began to
forget the value of standards and started pushing software that
was incompatible and which, not coincidentally, required major
increases in the amount of memory and disk space available to
it in order to operate. That strategy has not been well
received, and PC-DOS and MS-DOS are still by far the dominant
Operating Systems on the PC/XT/AT micro-computers.
Multi-tasking needs are usually met by software from off-brand
houses, such as DoubleDos, DESQview, and Multi-Link. IBM
introduced, late, poorly done, and inefficient as usual, a
multi-tasker called TopView, described by IBM as an emerging
standard but in reality a failure in the marketplace. Other
multi-taskers, such as Windows, are also available today.
And then came the super micros, which use the Intel 80386 CPU.
This machine can often run as much as 17 times faster than IBM's
original PC and can support a dozen megabytes of memory and more.
It is clearly the mainframe in miniature dreamed about only a few
years ago. And still there is no standard available for
multi-tasking or for tightly coupled or loosely coupled
multi-user needs, and a full generation of CPU (the 286) has
important capabilities (virtual memory and protected software
shells) that may never be supported with software. That is,
the 286 family of machines exists almost exclusively as faster
PC's, and so, too, do the 386's installed so far.
Why? Well perhaps the fact that IBM announced the PS/2 (Personal
System/2) family of micro-computers has something to do with
that. Perhaps in their zeal to force proprietary 3 1/2" floppy
disks onto the public, in order to drive the clone manufacturers
out of business, they failed to consider the needs of existing
computer owners. Perhaps IBM has all the answers and is about to
bring them to the market after all.
Which reminds me of the story of the most unfortunate woman who
had been married three times and still claimed to be a virgin.
Asked how that was possible she answered that the first time she
married she was young and so was her husband. On the wedding
night they had partied a bit too much and after a tragic car
accident she was left widowed and the marriage had not been
consummated. The second time she married it was her decision to
play it safe. She married an older man who was financially
secure and stable - didn't even drink. Unfortunately he was a
bit too old and on their wedding night he died of a heart attack
just after she had removed her clothes - widowed for a second
time without consummating the relationship. The third time she
married an IBM salesman. He was in his mid twenties, healthy,
good looking, and apparently eager. They had been married for
six months before she filed for divorce as the marriage remained
unconsummated. Asked why, she said that every night it was the
same old thing: he would get into bed and tell her how good it
was going to be.
Yes, Bruce, I have heard of the products you mentioned. In my
opinion you should be more interested in the established Local
Area Networks than in these products to satisfy the longer-range
needs of your client. 3COM Plus or Novell are the leading
contenders, and both have highly reliable and highly efficient
LAN capabilities. Multi-tasking is supported only on the server
and is reserved for the support of the slaves, not end-user
work. Tightly coupled systems are more cost-effective than LANs
but suffer from a lack of standards and a finite (small) number
of simultaneous users. Wish I could be more helpful.